Cluster Ensembles, Majority Vote, Voter Eligibility and Privileged Voters
Authors
Abstract
Similar Resources
"Good" and "Bad" Diversity in Majority Vote Ensembles
Although diversity in classifier ensembles is desirable, its relationship with the ensemble accuracy is not straightforward. Here we derive a decomposition of the majority vote error into three terms: average individual accuracy, “good” diversity and “bad diversity”. The good diversity term is taken out of the individual error whereas the bad diversity term is added to it. We relate the two div...
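As a rough numerical illustration (a sketch of my own, not code from the paper), the snippet below simulates a binary majority-vote ensemble and checks the decomposition described above: the majority-vote error equals the average individual error minus the per-example disagreement accumulated where the vote is correct ("good" diversity) plus the disagreement accumulated where it is wrong ("bad" diversity). The ensemble size and the 0.7 individual accuracy are arbitrary assumptions.

```python
import numpy as np

# Sketch only: simulate L independent base classifiers that are each right
# with probability 0.7, take the majority vote, and verify the decomposition
#   e_majority = e_individual_avg - good_diversity + bad_diversity.
rng = np.random.default_rng(0)
L, N = 11, 1000                                   # classifiers, examples (L odd)
y = rng.integers(0, 2, size=N)                    # true binary labels
preds = np.where(rng.random((L, N)) < 0.7, y, 1 - y)

maj = (preds.sum(axis=0) > L / 2).astype(int)     # majority-vote prediction
maj_correct = maj == y

e_ind = (preds != y).mean()                       # average individual error
disagree = (preds != maj).mean(axis=0)            # per-example disagreement with the vote

good_div = disagree[maj_correct].sum() / N        # disagreement where the vote is right
bad_div = disagree[~maj_correct].sum() / N        # disagreement where the vote is wrong
e_maj = (~maj_correct).mean()

assert np.isclose(e_maj, e_ind - good_div + bad_div)
print(f"e_maj={e_maj:.3f}  e_ind={e_ind:.3f}  good={good_div:.3f}  bad={bad_div:.3f}")
```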
Do Voters Vote Sincerely?
In this paper we address the following questions: (i) To what extent is the hypothesis that voters vote sincerely testable or falsifiable? And (ii) in environments where the hypothesis is falsifiable, to what extent is the observed behavior of voters consistent with sincere voting? We show that using data only on how individuals vote in a single election, the hypothesis that voters vote sincere...
The Dark Side of the Vote: Biased Voters, Social Information, and Information Aggregation Through Majority Voting
We experimentally investigate information aggregation through majority voting when some voters are biased. In such situations, majority voting can have a “dark side,” that is, result in groups making choices inferior to those made by individuals acting alone. In line with theoretical predictions, information on the popularity of policy choices is beneficial when a minority of voters is biased, b...
Majority vote following a debate
Voters determine their preferences over alternatives based on cases (or arguments) that are raised in the public debate. Each voter is characterized by a matrix, measuring how much support each case lends to each alternative, and her ranking is additive in cases. We show that the majority vote in such a society can be any function from sets of cases to binary relations over alternatives. A simi...
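A minimal sketch of such a model (my own illustration; the voter count, case count, and the random support values are made-up assumptions): each voter carries a matrix of case-to-alternative support, scores alternatives additively over the cases raised in the debate, and the societal relation is obtained by majority rule over the induced rankings.

```python
import numpy as np

# Illustrative sketch: voter v has a matrix support[v, case, alternative]; given
# the set of cases raised in the debate, v prefers a to b when the summed support
# for a exceeds that for b, and society keeps a over b if a strict majority agrees.
rng = np.random.default_rng(1)
n_voters, n_cases, n_alts = 5, 4, 3
support = rng.normal(size=(n_voters, n_cases, n_alts))

def majority_relation(raised_cases):
    """Boolean matrix R with R[a, b] = True iff a strict majority ranks a above b."""
    scores = support[:, raised_cases, :].sum(axis=1)      # voters x alternatives
    prefers = scores[:, :, None] > scores[:, None, :]     # voters x a x b
    return prefers.sum(axis=0) > n_voters / 2

# Which cases are raised can change the (possibly intransitive) majority relation.
print(majority_relation([0, 2]))
print(majority_relation([1, 3]))
```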
Vote-boosting ensembles
Vote-boosting is a sequential ensemble learning method in which individual classifiers are built on different weighted versions of the training data. To build a new classifier, the weight of each training instance is determined as a function of the disagreement rate of the current ensemble predictions for that particular instance. Experiments using the symmetric beta distribution as th...
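A toy sketch of one plausible reading of this procedure (my own, not the authors' implementation; the decision-tree base learner, the number of rounds, and the beta shape parameter a are assumptions): each round, instance weights follow a symmetric beta emphasis function of the ensemble's current vote split, so contested instances receive more weight when a > 1.

```python
import numpy as np
from scipy.stats import beta
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Toy sketch: reweight instances by a symmetric beta pdf of the ensemble's
# vote split, then fit the next tree on the reweighted data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
n_rounds, a = 20, 2.0                      # assumed settings, not the paper's
ensemble = []
weights = np.full(len(y), 1.0 / len(y))

for t in range(n_rounds):
    clf = DecisionTreeClassifier(max_depth=3, random_state=t)
    clf.fit(X, y, sample_weight=weights)
    ensemble.append(clf)

    votes = np.array([c.predict(X) for c in ensemble])   # (round + 1, n_samples)
    frac_one = votes.mean(axis=0)                        # share of votes for class 1
    # Beta(a, a) peaks at 0.5 for a > 1, so split (high-disagreement) instances
    # are emphasized; a small constant keeps unanimous instances trainable.
    weights = beta.pdf(frac_one, a, a) + 1e-12
    weights /= weights.sum()

majority = (np.array([c.predict(X) for c in ensemble]).mean(axis=0) > 0.5).astype(int)
print("training accuracy of the final majority vote:", (majority == y).mean())
```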
Journal
Journal title: International Journal of Machine Learning and Computing
Year: 2014
ISSN: 2010-3700
DOI: 10.7763/ijmlc.2014.v4.424